17 research outputs found

    Development of an architecture for autonomous mobile robots: application to a topological navigation system

    The objective of this thesis is the development of an architecture for the control of autonomous mobile robots and its application to topological navigation. The architecture was designed taking into account how mental processes work in humans. It consists of two levels, one deliberative and one automatic, and has therefore been named AD (Automatic-Deliberative Architecture). The deliberative level contains the skills that require a high computation time as a consequence of high-level reasoning. These skills are managed by a sequencer at this level, which activates and deactivates the different deliberative skills. The automatic level contains the skills that interact with the robot's sensors and actuators. These skills are managed by the deliberative-level skills and are in charge of the robot's movement and of activating the auxiliary systems needed to perceive the environment. The response of the automatic skills is faster than that of the deliberative skills. This architecture has been applied to a novel topological navigation system called EDN (Event Driven Navigation). The system is based on the information contained in an original type of topological map, the navigation chart, in which information is stored as nodes and arcs: the nodes correspond to sensory events and the arcs to actions. Finally, the feasibility of the new architecture and of the navigation system implemented on it has been demonstrated experimentally.
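    The node-and-arc structure of the navigation chart described above can be sketched as a small directed graph whose nodes are sensory events and whose arcs are actions. The following is an illustrative sketch only; the class, event and action names are invented, not taken from the thesis:

```python
from collections import deque

class NavigationChart:
    def __init__(self):
        # arcs[event] is a list of (action, next_event) pairs
        self.arcs = {}

    def add_arc(self, event, action, next_event):
        self.arcs.setdefault(event, []).append((action, next_event))

    def plan(self, start_event, goal_event):
        """Breadth-first search over events; returns a list of actions or None."""
        frontier = deque([(start_event, [])])
        visited = {start_event}
        while frontier:
            event, actions = frontier.popleft()
            if event == goal_event:
                return actions
            for action, nxt in self.arcs.get(event, []):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None

chart = NavigationChart()
chart.add_arc("door_A", "follow_corridor", "corner_1")
chart.add_arc("corner_1", "turn_left", "door_B")
print(chart.plan("door_A", "door_B"))  # ['follow_corridor', 'turn_left']
```

    In this sketch, navigating means recognising the current sensory event and executing the action attached to the arc leading towards the goal event.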

    A kinematic controller for liquid pouring between vessels modelled with smoothed particle hydrodynamics

    In robotics, the task of pouring liquids into vessels in non-structured or domestic spaces is an open field of study. A real-time fluid dynamic simulation, based on smoothed particle hydrodynamics (SPH), together with solid motion kinematics, allows for closed-loop control of pouring. First, a control criterion related to the behavior of the liquid free surface is established to handle sloshing, especially in the initial phase of pouring, to prevent liquid adhesion over the vessel rim. A 2-D, free-surface SPH simulation is implemented on a graphics processing unit (GPU) to predict the liquid motion with real-time capability. The pouring vessel has a single rotational degree of freedom, while the catching vessel has a single translational degree of freedom, and the control loop handles the tilting angle of the pouring vessel. In this work, a two-stage pouring method is proposed, differentiating an initial phase, where sloshing is particularly relevant, from a nearly constant outflow phase. For control purposes, the free outflow trajectory was simplified and modelled as a free-falling solid with an initial velocity at the vessel crest, as calculated by the SPH simulation. As the first stage of pouring is more delicate, a novel slosh induction method (SIM) is proposed to overcome spilling issues during the initial tilting of fully filled vessels. Both the robotic control and the fluid modelling showed good results at multiple initial vessel filling heights.
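    The simplified outflow model mentioned above, a free-falling solid leaving the vessel crest with an initial velocity, amounts to basic projectile motion. The sketch below illustrates that simplification only; the function name and the numbers are assumptions for illustration, not values from the paper:

```python
import math

def landing_offset(v0, angle_rad, drop_height, g=9.81):
    """Horizontal distance the stream travels while falling drop_height metres.

    v0 is the speed at the vessel crest (e.g. taken from an SPH simulation),
    angle_rad is the downward inclination of that velocity.
    """
    vx = v0 * math.cos(angle_rad)   # horizontal speed at the crest
    vy = v0 * math.sin(angle_rad)   # downward speed at the crest
    # solve drop_height = vy*t + 0.5*g*t**2 for the positive root t
    t = (-vy + math.sqrt(vy ** 2 + 2.0 * g * drop_height)) / g
    return vx * t

# e.g. a horizontal 0.5 m/s stream falling 0.1 m lands roughly 0.07 m away
offset = landing_offset(0.5, 0.0, 0.1)
```

    A controller could use such an offset to position the catching vessel under the predicted landing point of the stream.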

    Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    The future of robotics predicts that robots will integrate more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a great need for algorithms that provide robots with this sort of skill, from locating the objects needed to accomplish a task to treating those objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation, using current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both are combined to overcome their respective limitations. Finally, the code is tested on a real robot to prove its accuracy and efficiency. The research leading to these results has received funding from the ARCADIA project (DPI2010-21047-C02-01), a CICYT project grant funded by the Spanish Ministry of Economy and Competitiveness, and from the RoboCity2030-II project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU.
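    Combining two detectors so that each covers the other's misses can be done in several ways; one simple scheme is to keep every detection from one method and add the other method's boxes only where they do not overlap. This is a hedged sketch of that idea, not the paper's exact combination rule, and the IoU threshold is an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_detections(contour_boxes, descriptor_boxes, thr=0.5):
    """Keep all descriptor detections; add contour boxes that overlap none."""
    merged = list(descriptor_boxes)
    for box in contour_boxes:
        if all(iou(box, kept) < thr for kept in merged):
            merged.append(box)
    return merged
```

    With this rule, the descriptor-based method acts as the primary detector and contour detection fills in objects the descriptors miss.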

    Object detection applied to indoor environments for mobile robot navigation

    To move around the environment, human beings depend on sight more than on their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects in usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is the Support Vector Machine (SVM), with RGB and depth images as input. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of objects in indoor environments. Furthermore, through the comparison of the two proposed feature extraction methods, it has been determined which alternative offers better performance. The final results have been obtained on the problem as proposed and without changing the environment, that is to say, the environment has not been altered to perform the tests. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU, and the NAVEGASE-AUTOCOGNAV project (DPI2014-53525-C3-3-R), funded by Ministerio de Economía y Competitividad of Spain.

    Semantic information for robot navigation: a survey

    There is a growing trend in robotics towards implementing behavioural mechanisms based on human psychology, such as the processes associated with thinking. Semantic knowledge has opened new paths in robot navigation, allowing a higher level of abstraction in the representation of information. In contrast with the early years, when navigation relied on geometric navigators that interpreted the environment as a series of accessible areas, or with later developments that led to the use of graph theory, semantic information has moved robot navigation one step further. This work presents a survey of the concepts, methodologies and techniques that allow semantic information to be included in robot navigation systems. The techniques involved have to deal with a range of tasks, from modelling the environment and building a semantic map to learning new concepts and representing the knowledge acquired, in many cases through interaction with users. As understanding the environment is essential to achieve high-level navigation, this paper reviews techniques for the acquisition of semantic information, paying attention to the two main groups: human-assisted and autonomous techniques. Some state-of-the-art semantic knowledge representations are also studied, including ontologies, cognitive maps and semantic maps. All of this leads to a recent concept, semantic navigation, which integrates the previous topics to generate high-level navigation systems able to deal with real-world complex situations. The research leading to these results has received funding from HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economía y Competitividad. The research leading to this work was also supported by the project "Robots sociales para estimulación física, cognitiva y afectiva de mayores", funded by the Spanish State Research Agency under grant 2019/00428/001. It is also funded by WASP-AI Sweden and by the Spanish project Robotic-Based Well-Being Monitoring and Coaching for Elderly People during Daily Life Activities (RTI2018-095599-A-C22).

    A topological navigation system for indoor environments based on perception events

    The aim of the work presented in this article is to develop a navigation system that allows a mobile robot to move autonomously in an indoor environment using perceptions of multiple events. A topological navigation system based on events, which imitates human navigation using sensorimotor abilities and sensorial events, is presented. The increasing interest in building autonomous mobile systems makes the detection and recognition of perceptions a crucial task. The proposed system can be considered a perceptive navigation system, as the navigation process is based on the perception and recognition of natural and artificial landmarks, among others. The innovation of this work resides in the use of an integration interface to handle multiple events concurrently, leading to a more complete and advanced navigation system. The developed architecture eases the integration of new elements thanks to its modularity and the decoupling between modules. Finally, experiments have been carried out on several mobile robots, and their results show the feasibility of the proposed navigation system and the effectiveness of the sensorial data integration managed as events. The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: the research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU, and the NAVEGASE-AUTOCOGNAV project (DPI2014-53525-C3-3-R), funded by Ministerio de Economía y Competitividad of Spain.
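    An integration interface that handles multiple perception events concurrently can be pictured as a publish/subscribe dispatcher feeding a single queue. The sketch below is an illustrative assumption in that spirit, not the article's implementation; all class, event and handler names are invented:

```python
import queue

class EventIntegrator:
    """Collects perception events from many sources and dispatches them."""

    def __init__(self):
        self.events = queue.Queue()   # thread-safe, so sensors can publish concurrently
        self.handlers = {}            # event_type -> list of callbacks

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, data):
        self.events.put((event_type, data))

    def process_all(self):
        """Drain the queue, calling every handler registered for each event."""
        handled = []
        while not self.events.empty():
            event_type, data = self.events.get()
            for handler in self.handlers.get(event_type, []):
                handled.append(handler(data))
        return handled
```

    Decoupling producers (sensor modules) from consumers (navigation skills) in this way is what makes adding a new event source a purely local change.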

    A Semantic Labeling of the Environment Based on What People Do

    In this work, a system is developed for the semantic labeling of locations based on what people do in them, which is useful for the semantic navigation of mobile robots. The system differentiates environments according to what people do in them. Background sound, the number of people in a room and the amount of movement of those people are considered when trying to tell whether people are performing different actions. These data are sampled, under the assumption that people behave differently and perform different actions in different environments. A support vector machine is trained with the obtained samples and therefore allows one to identify the room. Finally, the results are discussed and support the hypothesis that the proposed system can help to semantically label a room. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU, and the NAVEGASE-AUTOCOGNAV project (DPI2014-53525-C3-3-R), funded by Ministerio de Economía y Competitividad of Spain.

    A biologically inspired architecture for an autonomous and social robot

    In recent years, much effort has been put into building robots able to live among humans. This has favoured the development of personal or social robots, which are expected to behave in a natural way. It implies that these robots should meet certain requirements: for example, being able to decide their own actions (autonomy), to make deliberative plans (reasoning), or to exhibit emotional behaviour in order to facilitate human-robot interaction. In this paper, the authors present a bioinspired control architecture for an autonomous and social robot that tries to accomplish some of these features. To develop this new architecture, the authors have used as a base a prior hybrid control architecture (AD) that is also biologically inspired. In the latter, however, the task to be accomplished at each moment is determined by a fixed sequence processed by the Main Sequencer, which coordinates the previously programmed sequence of skills to be executed. In the new architecture, the Main Sequencer is replaced by a decision-making system based on drives, motivations, emotions, and self-learning, which decides the proper action at every moment according to the robot's state. Consequently, the robot improves its autonomy, since the added decision-making system determines the goal and, consequently, the skills to be executed. A basic version of this new architecture has been implemented on a real robotic platform, and some experiments are shown at the end of the paper. This work has been supported by the Spanish Government through the project “Peer to Peer Robot-Human Interaction” (R2H) of MEC (Ministry of Science and Education), the project “A new approach to social robotics” (AROS) of MICINN (Ministry of Science and Innovation), and the CAM project S2009/DPI-1559/ROBOCITY2030 II, developed by the RoboticsLab research team at the University Carlos III of Madrid.

    Design of an infrared imaging system for robotic inspection of gas leaks in industrial environments

    Gas detection can become a critical task in dangerous environments that involve hazardous or contaminant gases, and the use of imaging sensors provides an important tool for leakage location. This paper presents a new design for remote sensing of gas leaks based on infrared (IR) imaging techniques. The inspection system uses an uncooled microbolometer detector, operating over a wide spectral bandwidth, that features both small size and low power consumption. This equipment is mounted on a robotic platform, so that wide objects or areas can be scanned. The detection principle is based on active imaging techniques, where external IR illumination enhances the detection limit and allows the proposed system to operate in most cases independently of environmental conditions, unlike passive commercial approaches. To illustrate this concept, a fully radiometric description of the detection problem has been developed, CO2 detection has been demonstrated, and simulations of typical gas detection scenarios have been performed, showing that typical industrial leaks of CH4 are well within the detection limits. The mobile platform on which the gas sensing system is to be implemented is a robot called TurtleBot. The control of the mobile base and of the inspection device is integrated in the ROS architecture. The exploration system is based on the Simultaneous Localization and Mapping (SLAM) technique, which makes it possible to locate the gas leak on the map. The authors would like to thank the RoboCity2030-II project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU.

    Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots

    Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping, navigation, and low-level scene segmentation. However, single-data-type maps are not enough in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases the capability of robots to map the surrounding environment at different levels, exploiting the benefits of several data types while counteracting the drawbacks of each sensor. This research introduces several techniques to achieve mapping and navigation through indoor environments. First, a scan matching algorithm based on ICP with a distance-threshold association counter is used as a multi-objective-like fitness function. Then, with Harmony Search, the results are optimized without any initial guess or odometry. A global map is then built during SLAM, reducing the accumulated error and demonstrating better results than odometry-only LiDAR matching. As a novelty, both algorithms are implemented in 2D and 3D mapping, overlapping the resulting maps to fuse geometrical information at different heights. Finally, a room segmentation procedure is proposed that analyses this information, avoiding the occlusions that appear in 2D maps, and its benefits are proven by implementing a door recognition system. Experiments are conducted in both simulated and real scenarios, proving the performance of the proposed algorithms. This work was supported by funding from HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economía y Competitividad, and RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub (S2018/NMT-4331), funded by “Programas de Actividades I+D en la Comunidad de Madrid” and cofunded by Structural Funds of the EU. We acknowledge the R&D&I project PLEC2021-007819, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and the Comunidad de Madrid (Spain) under the multiannual agreement with Universidad Carlos III de Madrid (“Excelencia para el Profesorado Universitario”, EPUC3M18), part of the fifth regional research plan 2016-2020.
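    The association step named in the abstract, nearest-neighbour matching under a distance threshold whose match count serves as a fitness term, can be sketched in a few lines. This is a pure-Python illustration under assumed names, not the authors' implementation:

```python
import math

def association_fitness(scan, ref, threshold):
    """Match each scan point to its nearest reference point.

    Returns (matched_count, mean_error): the count of points whose nearest
    neighbour lies within threshold, and the mean distance of those matches.
    The count can serve as one term of a multi-objective-like fitness.
    """
    matched, total_err = 0, 0.0
    for p in scan:
        d = min(math.dist(p, q) for q in ref)   # brute-force nearest neighbour
        if d <= threshold:
            matched += 1
            total_err += d
    mean_err = total_err / matched if matched else float("inf")
    return matched, mean_err
```

    An optimizer such as Harmony Search could then search over candidate scan transforms, preferring transforms that maximize the matched count while minimizing the mean error.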